forked from apache/tvm
Pull #5
Conversation
* Separate fusion and compilation
* Fix description of graph_fuse.h
* Fix lint
* Fix @masahi's comments; move fusion out of target
* Fix graph passing and make fused_entries singular in graph attr
* Fix typo
* Fix some comments
* Run tests again
* Remove rvalue for graphfuse and graphfindfusiablegroups
* [TOPI] Add injective scheduler for HLS backends (a generic sketch of an injective schedule follows below)
* Introduce PrintBinaryExpr
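For context, here is a minimal sketch of what an injective (elementwise) schedule typically does: fuse the output axes into a single loop that the target backend can then pipeline. This assumes the TVM 0.4-era Python API (tvm.placeholder, tvm.create_schedule) and is not the topi code added by this commit.

```python
import tvm

# Hypothetical elementwise (injective) computation.
m, n = 32, 32
A = tvm.placeholder((m, n), name="A")
B = tvm.compute((m, n), lambda i, j: A[i, j] * 2.0, name="B")

s = tvm.create_schedule(B.op)
# An injective schedule usually just fuses all output axes into one loop.
fused = s[B].fuse(B.op.axis[0], B.op.axis[1])

# Inspect the lowered loop structure.
print(tvm.lower(s, [A, B], simple_mode=True))
```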
* Use int for int8x4 due to the performance overhead of char4 (see the layout sketch below)
* Add a comment about using int
* Remove invalid test
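As an illustration of the data layout behind this change (plain numpy, not the generated CUDA code): the four int8 lanes of an int8x4 vector occupy the same 32 bits as a single int, which is why the codegen can carry them in an `int` instead of a `char4`.

```python
import numpy as np

# One int8x4 "vector": four 8-bit lanes.
lanes = np.array([1, -2, 3, -4], dtype=np.int8)

# Reinterpreting the same 4 bytes as a single 32-bit integer shows that
# an int8x4 value fits exactly into one int.
packed = lanes.view(np.int32)
print(lanes, "->", packed)
```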
* [NNVM][TENSORFLOW] Optimize TensorFlow test cases
* Replace Constants with Placeholders (see the sketch below)
* Fix review comments
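A hedged sketch of the test change described above, using the TensorFlow 1.x API of that era (tf.placeholder, tf.Session): instead of baking test data into the graph as a constant, the graph takes a placeholder and the data is fed at run time, so one graph can be reused across cases. The tensor names here are illustrative, not the ones from the test suite.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API assumed

data = np.random.uniform(size=(1, 3)).astype("float32")

# Before: the value is frozen into the graph.
# in_tensor = tf.constant(data, name="in_data")

# After: the value is supplied through feed_dict at run time.
in_tensor = tf.placeholder(tf.float32, shape=(1, 3), name="in_data")
out = tf.nn.relu(in_tensor)

with tf.Session() as sess:
    print(sess.run(out, feed_dict={in_tensor: data}))
```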
* [NNVM][TEST] Numerical gradient testing (a generic sketch of the check follows below)
* [NNVM][TEST] Make some tests a little faster
* Fix the failing test_top_level3
* Target exclusion for the check_function
* Try to ignore singularities
* grad_input_vars now can't contain shapes
* Don't pass unnecessary grad_input_vars to check_function
* Multiple outputs; fixes; testing of check_function
* Use numerical_grads_params to pass parameters to the numgrad checker
* Fail when no action is requested explicitly
* Pass additional params to functions
* Silence the linter issue
* Simplify numgrad checking
* Improve docs for check_function
* Fix the error message when no dtype is provided
* Several fixes
* Tests with shape/dtype inference for inputs
* Don't check dense's grads on cuda
* Raise an error if output dtypes haven't been inferred
* Move shape/dtype inference into a separate function; use float32 as fallback
* Remove redundant dtype=float32
* Fix multiple outputs
* Use check_function in the rest of test_top_level1
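For readers unfamiliar with the technique these helpers automate, here is a generic numerical-gradient check using central differences (plain numpy). NNVM's check_function and numerical_grads_params wrap this kind of comparison; the code below is only an illustrative sketch, not the NNVM implementation.

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference estimate of df/dx for a scalar-valued f."""
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f(x)
        x[idx] = orig - eps
        f_minus = f(x)
        x[idx] = orig
        grad[idx] = (f_plus - f_minus) / (2 * eps)
    return grad

# Example: f(x) = sum(x**2), whose analytic gradient is 2*x.
x = np.random.randn(3, 4).astype("float64")
num = numerical_grad(lambda v: np.sum(v ** 2), x)
np.testing.assert_allclose(num, 2 * x, rtol=1e-4, atol=1e-4)
```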
The old queue size is too small; it can stall the executor due to a race condition.
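As a minimal, self-contained illustration of the failure mode (not the TVM executor code): with a bounded queue that is too small, the producer blocks as soon as the queue fills while the consumer lags, stalling the pipeline.

```python
import queue
import threading
import time

q = queue.Queue(maxsize=1)  # deliberately undersized

def producer():
    for i in range(5):
        q.put(i)            # blocks whenever the queue is already full
        print("produced", i)

def consumer():
    for _ in range(5):
        time.sleep(0.1)     # a slow consumer makes the stall visible
        print("consumed", q.get())

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```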
…on registry, default fallback of IRPrinter. (#1652)
* [CODEGEN][AOCL] Add math intrinsic rules
* Introduce aocl_emu target for AOCL emulation
* Rename aocl_emu to aocl_sw_emu (a usage sketch follows below)
* Update docs
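A minimal sketch, assuming the TVM 0.4-era Python API, of how the aocl_sw_emu target named in this commit is selected when building a simple kernel. Actually compiling requires the Intel FPGA SDK for OpenCL emulation toolchain to be installed, which is not shown here.

```python
import tvm

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] + 1.0, name="B")

s = tvm.create_schedule(B.op)
# Offload the loop to the OpenCL device as a single work dimension.
s[B].bind(B.op.axis[0], tvm.thread_axis("blockIdx.x"))

# Select the software-emulation variant of the AOCL target.
fadd = tvm.build(s, [A, B], target="aocl_sw_emu", name="vec_add")
```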
… it (#1654)
* [TENSORFLOW] Fix the conversion of sum and add a test case for it (see the sketch below)
* Delete the check on the type of axis and divide the reduce test
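A hedged TensorFlow 1.x sketch of the kind of graph the patched converter needs to handle, a Sum reduction over an explicit axis. The specific case fixed by this commit is not visible in the excerpt, so the shapes and axis below are only illustrative.

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API assumed

data = np.random.uniform(size=(2, 3, 4)).astype("float32")
inp = tf.placeholder(tf.float32, shape=(2, 3, 4), name="in_data")
summed = tf.reduce_sum(inp, axis=1, name="sum_out")

with tf.Session() as sess:
    result = sess.run(summed, feed_dict={inp: data})
    print(result.shape)  # (2, 4)
```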
* Add docstring skip in hybrid script
* Fix lint
* [TOPI] Add NN schedulers for HLS backends
* Fix pylint
* Fix topi transform test
Thanks for contributing to TVM! Please refer to the contributor guidelines at https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from others in the community.